20 research outputs found

    An Efficient Method for online Detection of Polychronous Patterns in Spiking Neural Network

    Polychronous neural groups are effective structures for the recognition of precise spike-timing patterns, but the detection method is an inefficient multi-stage brute-force process that works off-line on pre-recorded simulation data. This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation. In this scheme, each neuron is assigned a randomized code that is used to tag the post-synaptic neurons whenever a spike is transmitted. This creates a polychronous code that preserves the order of pre-synaptic activity and can be registered in a hash table when the post-synaptic neuron spikes. A polychronous code is a sub-component of a polychronous group that will occur, along with others, when the group is active. We demonstrate the representational and pattern recognition ability of polychronous codes on a direction-selective visual task involving moving bars that is typical of a computation performed by simple cells in the cortex. The computational efficiency of the proposed algorithm far exceeds existing polychronous group detection methods and is well suited for online detection. Comment: 17 pages, 8 figures.

    Computational modeling of neural plasticity for self-organization of neural networks

    Chrol-Cannon J, Jin Y. Computational modeling of neural plasticity for self-organization of neural networks. Biosystems. 2014;125:43-54. Self-organization in biological nervous systems during an organism's lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the lines of combining findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence.

    On the Correlation between Reservoir Metrics and Performance for Time Series Classification under the Influence of Synaptic Plasticity

    Chrol-Cannon J, Jin Y. On the Correlation between Reservoir Metrics and Performance for Time Series Classification under the Influence of Synaptic Plasticity. PLoS ONE. 2014;9(7): e101792. Reservoir computing provides a simpler paradigm of training recurrent networks by initialising and adapting the recurrent connections separately from a supervised linear readout. This creates a problem, though. As the recurrent weights and topology are now separated from adapting to the task, there is a burden on the reservoir designer to construct an effective network that happens to produce state vectors that can be mapped linearly into the desired outputs. Guidance in forming a reservoir can come from established metrics that link a number of theoretical properties of the reservoir computing paradigm to quantitative measures that can be used to evaluate the effectiveness of a given design. We provide a comprehensive empirical study of four metrics: class separation, kernel quality, Lyapunov exponent and spectral radius. These metrics are each compared over a number of repeated runs, for different reservoir computing set-ups that include three types of network topology and three mechanisms of weight adaptation through synaptic plasticity. Each combination of these methods is tested on two time-series classification problems. We find that the two metrics that correlate most strongly with the classification performance are the Lyapunov exponent and kernel quality. It is also evident in the comparisons that these two metrics both measure a similar property of the reservoir dynamics. We also find that class separation and spectral radius are both less reliable and less effective in predicting performance.
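    Two of the metrics studied above can be computed directly from a reservoir. As a minimal illustrative sketch (not the paper's code; network size, connectivity and leak rate are arbitrary assumptions), spectral radius is the largest absolute eigenvalue of the recurrent weight matrix, and kernel quality can be approximated by the effective rank of the collected reservoir states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random sparse reservoir: 100 neurons, ~10% connectivity (illustrative values).
n = 100
W = rng.normal(0, 1, (n, n)) * (rng.random((n, n)) < 0.1)

# Spectral radius: largest absolute eigenvalue of the recurrent weight matrix.
spectral_radius = max(abs(np.linalg.eigvals(W)))

def run_reservoir(W, inputs, leak=0.3):
    """Drive a leaky tanh reservoir and collect its state trajectory."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + u)
        states.append(x.copy())
    return np.array(states)

# Kernel quality (approximated): effective rank of the state matrix --
# higher rank indicates richer, more separable reservoir dynamics.
inputs = rng.normal(0, 1, (200, n))
X = run_reservoir(W, inputs)
kernel_quality = np.linalg.matrix_rank(X, tol=1e-6)
```

    In practice these measures are taken over many repeated initialisations, as in the study above, since a single random draw of the weights can be unrepresentative.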

    Learning structure of sensory inputs with synaptic plasticity leads to interference

    Chrol-Cannon J, Jin Y. Learning structure of sensory inputs with synaptic plasticity leads to interference. Frontiers in Computational Neuroscience. 2015;9. Synaptic plasticity is often explored as a form of unsupervised adaptation in cortical microcircuits to learn the structure of complex sensory inputs and thereby improve performance of classification and prediction. The question of whether the specific structure of the input patterns is encoded in the structure of neural networks has been largely neglected. Existing studies that have analyzed input-specific structural adaptation have used simplified, synthetic inputs in contrast to the complex and noisy patterns found in real-world sensory data. In this work, input-specific structural changes are analyzed for three empirically derived models of plasticity applied to three temporal sensory classification tasks that include complex, real-world visual and auditory data. Two forms of spike-timing dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) plasticity rule are used to adapt the recurrent network structure during the training process before performance is tested on the pattern recognition tasks. It is shown that synaptic adaptation is highly sensitive to specific classes of input pattern. However, plasticity does not improve the performance on sensory pattern recognition tasks, partly due to synaptic interference between consecutively presented input samples. The changes in synaptic strength produced by one stimulus are reversed by the presentation of another, thus largely preventing input-specific synaptic changes from being retained in the structure of the network. To solve the problem of interference, we suggest that models of plasticity be extended to restrict neural activity and synaptic modification to a subset of the neural circuit, which is increasingly found to be the case in experimental neuroscience.
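    The classical pair-based STDP rule referenced in these abstracts can be sketched in a few lines. This is an illustrative toy version with arbitrary parameter values (not those used in the paper): a pre-before-post spike pair potentiates the synapse, a post-before-pre pair depresses it, each with an exponentially decaying window:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a spike pair separated by delta_t = t_post - t_pre (ms).

    Classical double-decaying-exponential STDP window: potentiation (LTP)
    when the pre-synaptic spike precedes the post-synaptic spike,
    depression (LTD) otherwise. Parameter values are illustrative only.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)    # pre before post -> LTP
    return -a_minus * math.exp(delta_t / tau)       # post before pre -> LTD
```

    The interference effect described above arises because updates like this are applied continuously: a later stimulus driving pairs with opposite timing can undo the weight changes an earlier stimulus produced.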

    An efficient method for online detection of polychronous patterns in spiking neural networks

    Chrol-Cannon J, Jin Y, Grüning A. An efficient method for online detection of polychronous patterns in spiking neural networks. Neurocomputing. 2017;267:644-650. Polychronous neural groups are effective structures for the recognition of precise spike-timing patterns, but the detection method is an inefficient multi-stage brute-force process that works off-line on pre-recorded simulation data. This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation. In this scheme, each neuron is assigned a randomized code that is used to tag the post-synaptic neurons whenever a spike is transmitted. This creates a polychronous code that preserves the order of pre-synaptic activity and can be registered in a hash table when the post-synaptic neuron spikes. A polychronous code is a sub-component of a polychronous group that will occur, along with others, when the group is active. We demonstrate the representational and pattern recognition ability of polychronous codes on a direction-selective visual task involving moving bars that is typical of a computation performed by simple cells in the cortex. By avoiding the structural and temporal analyses of polychronous group detection methods, the computational efficiency of the proposed algorithm is improved for pattern recognition by almost four orders of magnitude and is well suited for online detection.
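    The tagging scheme described in this abstract might be sketched as follows. This is a minimal illustrative reconstruction from the abstract alone, not the authors' implementation: each neuron carries a random code, every transmitted spike appends the sender's code to the receiver's buffer, and when the receiver fires, the ordered buffer (the polychronous code) is registered in a hash table:

```python
import random
from collections import defaultdict

class PolychronousDetector:
    """Toy sketch of online polychronous-code registration (hypothetical
    class, reconstructed from the abstract's description)."""

    def __init__(self, n_neurons, seed=0):
        rng = random.Random(seed)
        # Each neuron is assigned a randomized code used to tag receivers.
        self.codes = [rng.getrandbits(32) for _ in range(n_neurons)]
        self.buffers = defaultdict(list)   # post-neuron -> ordered pre-synaptic codes
        self.table = defaultdict(int)      # polychronous code -> occurrence count

    def transmit(self, pre, post):
        # A spike transmitted from `pre` tags the post-synaptic neuron,
        # preserving the order of pre-synaptic activity.
        self.buffers[post].append(self.codes[pre])

    def fire(self, post):
        # When the post-synaptic neuron spikes, its ordered tag sequence
        # is registered in the hash table and the buffer is cleared.
        key = tuple(self.buffers.pop(post, ()))
        self.table[key] += 1
        return key

detector = PolychronousDetector(4)
detector.transmit(0, 2)   # neuron 0 spikes into neuron 2
detector.transmit(1, 2)   # then neuron 1 spikes into neuron 2
code = detector.fire(2)   # neuron 2 fires: register the ordered code (c0, c1)
```

    Because registration is a single hash-table operation per post-synaptic spike, detection can run alongside the simulation rather than as a brute-force post-hoc analysis, which is the source of the efficiency gain the abstract reports.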

    The emergence of polychronous groups under varying input patterns, plasticity rules and network connectivities

    Chrol-Cannon J, Grüning A, Jin Y. The emergence of polychronous groups under varying input patterns, plasticity rules and network connectivities. In: The 2012 International Joint Conference on Neural Networks (IJCNN). IEEE; 2012: 1-6. Polychronous groups are unique temporal patterns of neural activity that exist implicitly within non-linear, recurrently connected networks. Through Hebbian-based learning these groups can be strengthened to give rise to larger chains of spatiotemporal activity. Compared to other structures such as synfire chains, they have demonstrated the potential of a much larger capacity for memory or computation within spiking neural networks. Polychronous groups are believed to relate to the input signals under which they emerge. Here we investigate the quantity of groups that emerge from increasing numbers of repeating input patterns, whilst also comparing the differences between two plasticity rules and two network connectivities. We find - perhaps counter-intuitively - that fewer groups are formed as the number of repeating input patterns increases. Furthermore, we find that a tri-phasic learning rule gives rise to fewer groups than the `classical' double decaying exponential STDP plasticity window. It is also found that a scale-free network structure produces a similar quantity of, but generally smaller, groups than a randomly connected Erdős–Rényi structure.

    The two predominantly studied STDP learning windows.


    Modeling neural plasticity in echo state networks for classification and regression

    Yusoff M-H, Chrol-Cannon J, Jin Y. Modeling neural plasticity in echo state networks for classification and regression. Information Sciences. 2016;364-365:184-196. Echo state networks (ESNs) are one of two major neural network models belonging to the reservoir computing framework. Traditionally, only the weights connecting to the output neuron, termed read-out weights, are trained using a supervised learning algorithm, while the weights inside the reservoir of the ESN are randomly determined and remain unchanged during the training. In this paper, we investigate the influence of neural plasticity applied to the weights inside the reservoir on the learning performance of the ESN. We examine the influence of two plasticity rules, the anti-Oja learning rule and the Bienenstock–Cooper–Munro (BCM) learning rule, on the prediction and classification performance when either offline or online supervised learning algorithms are employed for training the read-out connections. Empirical studies are conducted on two widely used classification tasks and two time series prediction problems. Our experimental results demonstrate that neural plasticity can more effectively enhance the learning performance when offline learning is applied. The results also indicate that the BCM rule outperforms the anti-Oja rule in improving the learning performance of the ESN in the offline learning mode.
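    The traditional ESN training pipeline described above (fixed random reservoir, supervised read-out only) can be sketched briefly. This is an illustrative toy example under assumed values (reservoir size, spectral-radius scaling, ridge parameter and the sine target are all arbitrary choices, not the paper's set-up), showing the offline learning mode in which the read-out is fitted in one batch by ridge regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 50, 1

# Reservoir and input weights are randomly determined and stay fixed.
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def collect_states(inputs):
    """Run the reservoir over an input sequence and record each state."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Offline (batch) read-out training: ridge regression on collected states.
U = rng.uniform(-1, 1, (300, n_in))
X = collect_states(U)
y = np.sin(3 * U[:, 0])                     # toy target signal
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

    The plasticity rules compared in the paper would additionally adapt `W` itself during training, which is the departure from this traditional fixed-reservoir scheme.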

    Class separation results for 10 initialisations for each combination of plasticity rule, connectivity method and time-series task.


    Pearson's Correlation between Metrics and Performance.
